
Adaptive Ensemble Learning with Confidence Bounds


Abstract

Extracting actionable intelligence from distributed, heterogeneous, correlated and high-dimensional data sources requires run-time processing and learning both locally and globally. In the last decade, a large number of meta-learning techniques have been proposed in which local learners make online predictions based on their locally-collected data instances, and feed these predictions to an ensemble learner, which fuses them and issues a global prediction. However, most of these works do not provide performance guarantees or, when they do, these guarantees are asymptotic. None of these existing works provide confidence estimates about the issued predictions or rate of learning guarantees for the ensemble learner. In this paper, we provide a systematic ensemble learning method called Hedged Bandits, which comes with both long run (asymptotic) and short run (rate of learning) performance guarantees. Moreover, our approach yields performance guarantees with respect to the optimal local prediction strategy, and is also able to adapt its predictions in a data-driven manner. We illustrate the performance of Hedged Bandits in the context of medical informatics and show that it outperforms numerous online and offline ensemble learning methods.
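The abstract describes a two-level architecture: local learners issue online predictions from their own data, and an ensemble learner fuses those predictions into a global one while adapting its weights as outcomes arrive. The sketch below illustrates only that general pattern with a simple exponential-weights (Hedge-style) combiner over hypothetical local learners; it is not the paper's Hedged Bandits algorithm, and the class name, learning rate, and squared-error loss are illustrative assumptions.

```python
import numpy as np

class HedgeEnsemble:
    """Toy exponential-weights combiner over local learners' predictions.

    Illustrative sketch of the ensemble pattern in the abstract, not the
    paper's Hedged Bandits method.
    """

    def __init__(self, n_learners, eta=0.1):
        self.weights = np.ones(n_learners)  # one weight per local learner
        self.eta = eta                      # combiner's learning rate (assumed value)

    def predict(self, local_predictions):
        # Fuse the local predictions into a global prediction via a weighted average.
        p = self.weights / self.weights.sum()
        return float(np.dot(p, local_predictions))

    def update(self, local_predictions, outcome):
        # Penalize each local learner by its own loss (squared error, assumed here)
        # and shrink its weight exponentially in that loss.
        losses = (np.asarray(local_predictions, dtype=float) - outcome) ** 2
        self.weights *= np.exp(-self.eta * losses)


# Usage: two hypothetical local learners observed over a short outcome stream.
ensemble = HedgeEnsemble(n_learners=2)
for outcome in [1.0, 0.0, 1.0, 1.0]:
    local_preds = [0.9, 0.2]                 # predictions from the local learners
    global_pred = ensemble.predict(local_preds)
    ensemble.update(local_preds, outcome)    # adapt weights in a data-driven way
```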
